HIVE-29376: Using partition spec in DESC FORMATTED sql is unsupported for Iceberg table #6259
base: master
Conversation
  return null;
}

private Partition getPartition(Table tab, Map<String, String> partitionSpec) throws HiveException {
The logic here is almost the same as getPartition() in DescTableOperation.java: https://github.com/apache/hive/pull/6259/changes#diff-641c62b42b01bff41c89a3b3661c15d6c08fce0e48740347f36fe32448984147R131
Check if you can move it into a utility so it can be reused.
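For illustration, a shared helper along these lines could work (the class name, placement, and lookup details here are only a sketch, not the actual patch):

import java.util.Map;

import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table;

// Hypothetical shared utility; the real patch may resolve the partition differently.
public final class PartitionSpecUtils {

  private PartitionSpecUtils() {
  }

  public static Partition getPartition(Hive db, Table tab, Map<String, String> partitionSpec)
      throws HiveException {
    if (partitionSpec == null || !tab.isPartitioned()) {
      return null;
    }
    // Look up the partition without creating it; callers decide how to report a miss.
    Partition partition = db.getPartition(tab, partitionSpec, false);
    if (partition == null) {
      throw new HiveException("Partition not found for spec: " + partitionSpec);
    }
    return partition;
  }
}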
Thanks for checking. I moved this logic out of DescTableOperation.java and DescTableAnalyzer.java into HiveIcebergStorageHandler.java:
https://github.com/apache/hive/pull/6259/changes#diff-93864ecf035fe51b92185015da842a56837cea89064813de39c278c6f8fed03cR2079
Please take a look.
soumyakanti3578 left a comment
DDLUtils.isIcebergTable() has been used in many places in the compiler. I think we should not use a specific table type here.
}

private Partition getPartition(Table tab, Map<String, String> partitionSpec) throws HiveException {
  boolean isIcebergTable = DDLUtils.isIcebergTable(tab);
Please don't use DDLUtils.isIcebergTable, as this API is too specific for the compiler. Instead, please use tab.isNonNative() in conjunction with other APIs if needed.
Thanks for checking. I tried to make the changes generic to non-native tables instead of Iceberg-centric; please take a look.
@ramitg254, consider combining this with Table.hasNonNativePartitionSupport.
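i.e., roughly something like this (sketch only; tab is the Table from the surrounding method):

// Prefer the generic Table APIs over DDLUtils.isIcebergTable() when deciding whether
// partition handling should go through the storage handler.
boolean useStorageHandler = tab.isNonNative() && tab.hasNonNativePartitionSupport();
List<FieldSchema> partCols = useStorageHandler
    ? tab.getStorageHandler().getPartitionKeys(tab)
    : tab.getPartCols();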
  return false;
}

default boolean isPartitionPresent(org.apache.hadoop.hive.ql.metadata.Table table,
drop this
Actually, I added it because of these considerations (rough sketch below):
- getPartition in Iceberg returns a dummy partition for the given partSpec without checking whether it is already present, and that behavior is also relied on elsewhere, e.g. for insert queries on Iceberg tables; that's why I added this separate method.
- I also considered adding this method to Hive.java instead of the storage handler, but that would be too specific; if we add a new storage handler in the future, it may need a different way to check whether a partition is present.
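Roughly, the shape I had in mind (sketch only; the exact default behavior and signature in the patch may differ):

// In HiveStorageHandler: a conservative default, so handlers without partition
// support simply report that the partition is not present.
default boolean isPartitionPresent(org.apache.hadoop.hive.ql.metadata.Table table,
    Map<String, String> partitionSpec) {
  return false;
}

The Iceberg handler would then override this with an actual lookup against the table's partitions.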
getPartition in Iceberg returns a dummy partition for the given partSpec without checking whether it is already present

Exactly. I think we need to change that API and also drop the RewritePolicy policy arg (move Context.RewritePolicy.get(conf) inside the StorageHandler). getPartition() should return a dummy partition for the provided partSpec only if it exists; otherwise it is confusing.
Please check whether, at the moment, this method is only used by Iceberg compaction. In that scenario we could skip the validation (i.e. check for SessionState.get().isCompaction()).
cc @difin
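Something along these lines, as a rough illustration (validatePartitionExists is a placeholder name for whatever the existence check ends up being, not an existing API):

// Skip the existence validation when running compaction, which may legitimately pass a
// spec for a partition it is about to rewrite.
boolean isCompaction = SessionState.get() != null && SessionState.get().isCompaction();
if (!isCompaction) {
  validatePartitionExists(table, partitionSpec);   // hypothetical helper for the existence check
}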
@deniskuzZ I updated the getPartition API implementation as suggested, and the tests pass without affecting any insert- or compaction-related queries. Please have a look.
LGTM, but I have a question for @difin: is isPartitionPresent(table, partitionSpec) necessary in the context of compaction?
Resolved review threads:
iceberg/iceberg-handler/src/test/queries/negative/desc_ice_tbl_partial_part_spec.q
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java (outdated)
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergStorageHandler.java (outdated)
try {
  List<String> partNames = getPartitionNames(icebergTable, partitionSpec, false);
  return !partNames.isEmpty() &&
      Warehouse.makePartName(partitionSpec, false).equals(partNames.getFirst());
Do we really need the
Warehouse.makePartName(partitionSpec, false).equals(partNames.getFirst())
check? Isn't !partNames.isEmpty() enough? We already pass the partitionSpec to getPartitionNames.
But say, for example, the partSpec is c=6 and the only available partition is c=6/d=hello6; then we want to throw a partition-not-found exception, and that's why the .equals check is required as well.
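To make that concrete, a small self-contained illustration (values hard-coded; in the real code the "c=6" string comes from Warehouse.makePartName(partitionSpec, false)):

import java.util.List;
import java.util.Map;

public class PartialSpecExample {
  public static void main(String[] args) {
    // Table partitioned by (c, d); only partition c=6/d=hello6 exists.
    Map<String, String> partitionSpec = Map.of("c", "6");
    List<String> partNames = List.of("c=6/d=hello6");   // what getPartitionNames returns for this spec

    String expectedName = "c=6";                         // what makePartName(partitionSpec, false) would yield
    boolean present = !partNames.isEmpty() && expectedName.equals(partNames.get(0));
    System.out.println(present);                         // false -> DESC ... PARTITION (c=6) reports "partition not found"
  }
}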
    Map<String, String> partitionSpec, RewritePolicy policy) throws SemanticException {
  validatePartSpec(table, partitionSpec, policy);
  try {
    Table icebergTable = IcebergTableUtil.getTable(conf, table.getTTable());
getTable() could be moved out of the try block. I think the same is true for the hasPartition check.
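i.e. roughly (shape only; the bodies are abbreviated, not the actual code):

// Resolve the Iceberg table (and any cheap precondition checks) before the try,
// so the catch only wraps the lookups that can actually fail.
Table icebergTable = IcebergTableUtil.getTable(conf, table.getTTable());
try {
  List<String> partNames = getPartitionNames(icebergTable, partitionSpec, false);
  // existence check as discussed above
} catch (Exception e) {
  throw new SemanticException(e);
}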
- List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+ List<FieldSchema> partitionColumns = null;
+ if (table.isPartitioned()) {
+   partitionColumns = table.hasNonNativePartitionSupport() ?
Not sure, but maybe we could move the hasNonNativePartitionSupport logic inside Table#getPartCols(), similar to:
public List<String> getPartColNames() {
  List<FieldSchema> partCols = hasNonNativePartitionSupport() ?
      getStorageHandler().getPartitionKeys(this) : getPartCols();
  return partCols.stream().map(FieldSchema::getName)
      .collect(Collectors.toList());
}
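For illustration, moving the check into getPartCols() itself might look roughly like this (sketch only; the native-path lookup is assumed to stay as it is today):

public List<FieldSchema> getPartCols() {
  if (hasNonNativePartitionSupport()) {
    return getStorageHandler().getPartitionKeys(this);
  }
  return tTable.getPartitionKeys();   // assumed existing native lookup on the metastore table
}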
  List<String> values = new ArrayList<String>();
- for (FieldSchema fs : this.getTable().getPartCols()) {
+ for (FieldSchema fs :
+     table.hasNonNativePartitionSupport() ?
same here
HIVE-29376: Using partition spec in DESC FORMATTED sql is unsupported for Iceberg table
What changes were proposed in this pull request?
Adds support for using a partition spec with the DESCRIBE statement for Iceberg tables.
Also updates the other test outputs, since partition information is now printed for DESC statements after these changes.
Why are the changes needed?
Currently, using a partition spec with the DESCRIBE statement for an Iceberg table results in an unsupported-operation exception.
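For example, a statement like DESCRIBE FORMATTED ice_tbl PARTITION (c=6) (table and column names here are only illustrative) fails with such an exception on Iceberg tables before this change.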
Does this PR introduce any user-facing change?
Yes, this statement will no longer result in an exception.
How was this patch tested?
Built locally, ran CI tests, and added q tests.